METHOD AND DEVICE FOR CALIBRATING A PERCEPTION SYSTEM COMPRISING A SET OF LIDAR RANGEFINDERS
Patent Abstract:
The invention relates to a method for determining the extrinsic parameters of a perception system on board a vehicle (100) traveling a route and comprising a set of rangefinders (L1, L2) supplying streams of frames to a processing device (110). The method comprises determining the pose of the vehicle as a function of time, and is characterized in that it comprises the detection of landmarks within the streams of frames corresponding to landmarks in the real world, then the determination of the extrinsic parameters minimizing a cost function measuring the differences between the detected landmarks associated with the same real-world landmark.

Publication number: FR3062507A1
Application number: FR1750802
Filing date: 2017-01-31
Publication date: 2018-08-03
Inventors: Michel Dhome; Eric Royer; Morgan Slade
Applicants: Centre National de la Recherche Scientifique CNRS; Universite Clermont Auvergne
Patent Description:
FIELD OF THE INVENTION

The present invention relates to a method for calibrating a computer perception system on board a vehicle comprising one or more laser remote-sensing devices. It applies particularly well to autonomous vehicles and to the mechanisms of simultaneous localization and mapping, or SLAM ("Simultaneous Localization and Mapping"), used for these vehicles.

BACKGROUND OF THE INVENTION

Simultaneous localization and mapping mechanisms allow autonomous vehicles to map the environment in which they are traveling and to locate themselves within it. These mechanisms are most often designated by their acronyms: SLAM or CML ("Concurrent Mapping and Localization"). To do this, autonomous vehicles have a computer perception system. These perception systems very generally consist of a set of sensors and a central device for processing the signals acquired by the sensors in order to determine, or improve, a map of the environment in which the vehicle is traveling, and to locate the vehicle within it.

These sensors very often include cameras providing video streams to the processing device. The latter can then determine, in the video streams, points or structures of characteristic points, usually called "landmarks", which make it possible, by correlation between different video streams, to construct the map of the environment and to determine the pose of the vehicle in this environment, that is to say its location and its orientation.

In order to improve the performance of these computer perception systems, other types of sensors can be embedded, including LIDARs ("Light Detection And Ranging"). LIDARs, or lidars, are laser telemetry devices based on measuring the properties of the beam of light returned by an obstacle encountered by the emitted beam. A well-known application of the joint use of video cameras and lidars is "Google Street View™".

In order to determine, or improve, a map, the processing device must know the calibration parameters of the various sensors. Knowledge of these parameters is also crucial for the precise location of obstacles detected in the case of autonomous navigation of a vehicle. These calibration parameters include intrinsic calibration parameters, i.e. parameters specific to each sensor. For a video camera, these intrinsic parameters can include the focal length, the position of the principal point, the center of the image, the distortion coefficients, etc. Since a computer perception system has more than one sensor, it is also necessary to know the extrinsic calibration parameters.
These extrinsic parameters correspond to the parameters for passing from the reference frame of one sensor to the reference frame of another sensor. This change of coordinate system can be defined by 6 extrinsic parameters: 3 translation parameters and 3 rotation parameters. This model requires that the sensors be rigidly linked to each other, that is to say, typically, that they be rigidly secured to the autonomous vehicle. Consequently, the parameters are fixed over time, and knowledge of the intrinsic and extrinsic calibration parameters allows the processing device to determine an undistorted map from the data supplied by the various sensors.

Different techniques can be used to determine the extrinsic parameters. A first technique may consist in determining these parameters by measuring them directly on the vehicle. However, the sensors may be located in housings making access for an accurate measurement difficult. In particular, it is very difficult to measure the orientation of the sensors. Another widely used technique consists in using calibration targets, and determining the extrinsic parameters by comparing the perception of the same target within the data streams coming from the various sensors. However, this approach requires human intervention. This human intervention is a drawback, and can even be a crippling obstacle in an industrial context if one wishes to periodically update the extrinsic calibration parameters: this technique then requires taking the vehicle out of operation to subject it to a recalibration step in a dedicated space using human resources. In addition, this approach requires overlapping fields, that is to say that the same target must be perceived by several sensors for the same pose of the vehicle. This is a strong constraint on the design of the vehicle and the on-board perception system.

Fully automatic solutions have been proposed for the calibration of extrinsic parameters of video cameras. Such a solution is explained in particular in patent application WO2013/053701, entitled "Method of calibrating a computer-based vision system on board a craft". But this solution concerns only vision and in no way another mode of perception, in particular by lidar.

SUMMARY OF THE INVENTION

The object of the present invention is to provide a solution which at least partially overcomes the aforementioned drawbacks. More particularly, the invention aims to provide a method of calibrating a computer perception system having non-video sensors (for example lidars), which is automatic (that is to say without human intervention) and which does not require overlapping fields between its different sensors.

To this end, the present invention provides a method for determining the extrinsic parameters of a perception system on board a vehicle traveling a route and comprising a set of rangefinders providing streams of frames to a processing device, said method comprising determining the pose of said vehicle as a function of time, and being characterized in that it comprises the detection of landmarks within said streams of frames corresponding to landmarks in the real world, then the determination of the extrinsic parameters minimizing a cost function measuring the differences between the detected landmarks associated with the same real-world landmark.
According to preferred embodiments, the invention comprises one or more of the following characteristics, which can be used separately, in partial combination, or in total combination with one another:
- said pose is determined by an inertial unit;
- the calibration parameters of at least one video camera belonging to said perception system are determined, then said pose is determined as a function of said parameters;
- the determination of said calibration parameters of said at least one video camera comprises the reconstruction of a map of the environment of said vehicle comprising 3D landmarks modeling the landmarks of the real world; the optimization of at least one map corresponding to a first sequence of images from said at least one video camera, considering at least one extrinsic parameter and/or at least one intrinsic parameter and/or at least one pose parameter and/or a 3D landmark parameter as constant; and the optimization of at least one map corresponding to a second sequence of images longer than the first sequence of images and including the first sequence of images, considering said at least one extrinsic parameter and/or said at least one intrinsic parameter and/or said at least one pose parameter and/or a 3D landmark parameter as variable so as to estimate it;
- the detected landmarks are grouped according to a proximity criterion;
- said landmarks are straight-line segments;
- said cost function measures the difference between the detected lines belonging to a group;
- the minimization of said cost function is done iteratively using a Levenberg-Marquardt type algorithm.

Another object of the invention relates to a computer program comprising software instructions implementing a method as previously defined, when deployed on an information processing device. Another object of the invention relates to a processing device for determining the extrinsic parameters of a perception system on board a vehicle traveling a route and comprising a set of rangefinders providing streams of frames to said processing device, comprising the software and/or hardware means for implementing the method as defined above.

Other characteristics and advantages of the invention will appear on reading the following description of a preferred embodiment of the invention, given by way of example and with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 schematically represents an example of a vehicle that can implement the method according to the invention.
FIG. 2 schematically represents a functional sequence illustrating an embodiment of the invention.
FIG. 3 schematically represents an example of the definition of a plane, according to an embodiment of the invention.
FIG. 4 is an example of a map giving a schematic view of a result of the minimization step.

DETAILED DESCRIPTION OF THE INVENTION

The invention applies particularly to autonomous vehicles, also called "self-guided" vehicles, but it can be applied to any other type of vehicle as soon as the problem arises of constructing a map of the environment in which the vehicle moves. The invention is also very useful for finding, in the reference frame of the vehicle or of the other sensors, the position of an obstacle detected by the lidar.

Figure 1 schematically shows an example of such a vehicle 100, comprising 4 wheels 101, 102, 103, 104 and a chassis. The vehicle 100 has a perception system comprising two video cameras C1, C2 and two LIDAR rangefinders L1, L2.
Of course, the invention applies to any other arrangement of vehicles, autonomous or not. In particular, as will be seen below, the invention can be applied to perception systems which do not have a video camera, but, for example, only lidars.

These four sensors C1, C2, L1, L2 are fixed to the chassis of the vehicle, so that their relative positions and orientations are fixed, or substantially fixed. They are also connected to a processing device 110 which receives the data streams supplied by these sensors. These data streams are sequences of images supplied by the cameras, sequences of frames supplied by the lidars, etc. In the example of Figure 1, each sensor is oriented in a different direction so as to each cover a separate field for a given pose of the vehicle. The fields covered are represented by the dotted lines in the figure. In other arrangements, the fields of the sensors may overlap, but an advantage of the invention is that this characteristic is indifferent.

The processing device 110 is provided for determining a map of the environment of the vehicle from the data streams coming from the perception system, and for locating the vehicle within it. This device therefore implements a SLAM mechanism, which will not be described further since it does not fall within the scope of the invention, which relates solely to the calibration of the sensors in order to allow the SLAM mechanism to operate with the necessary precision. In an autonomous or self-guided vehicle, the processing device 110 can further be provided to act on the vehicle's steering system in order to direct it according to the established map. This processing device is typically implemented by a coupling between hardware means and software means. In particular, a computer can be programmed with algorithms implementing the steps of the method according to the invention which will be described.

As mentioned above, the invention aims to determine the extrinsic calibration of the various sensors, that is to say the position and the relative orientation of the sensors with respect to each other, or with respect to a common reference frame, which can be that of the vehicle. In the following, the calibration of the intrinsic parameters of the sensors is not considered. A geometrical transformation makes it possible to pass from the reference frame of one sensor to that of another sensor. This transformation can be broken down into a translation defined by three parameters and a rotation, also defined by three parameters. The extrinsic calibration of a perception system therefore consists in determining a set of six parameters per sensor, defining the position and the orientation of each sensor relative to the vehicle 100 itself. Thus, the data streams coming from each sensor can be processed as a function of the extrinsic calibration parameters of the sensor in question in order to provide relevant information in a common reference frame (an illustrative sketch of this change of reference frame is given below). In particular, cross-checking between this information then becomes possible and can make it possible to build the map using a SLAM mechanism.

In order to know all the extrinsic calibration parameters, a calibration phase is necessary. According to the invention, this phase can be carried out without human intervention, and even during normal operation of the vehicle. Thus, a parameter refresh phase can be periodically triggered without interrupting the operation of the vehicle, in order to compensate for any drift of these parameters.
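By way of illustration, the following minimal sketch (in Python, with NumPy and SciPy; it is not part of the patent) shows how six extrinsic parameters define a sensor's placement, and how two placements expressed in the vehicle frame compose into a sensor-to-sensor change of reference frame. The sensor placements used are hypothetical.

```python
# Illustrative sketch only (not part of the patent): how six extrinsic
# parameters define a sensor's placement, and how two placements expressed in
# the vehicle frame compose into a sensor-to-sensor change of reference frame.
import numpy as np
from scipy.spatial.transform import Rotation

def extrinsic_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform from 3 translations (metres) and 3 rotations
    (XYZ Euler angles, radians)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical placements of a lidar and a camera in the vehicle frame.
T_vehicle_lidar = extrinsic_matrix(1.2, 0.4, 0.0, 0.0, 0.0, np.pi / 2)
T_vehicle_camera = extrinsic_matrix(1.5, -0.4, 0.2, 0.0, 0.1, 0.0)

# Composition gives the transform from the lidar frame to the camera frame.
T_camera_lidar = np.linalg.inv(T_vehicle_camera) @ T_vehicle_lidar

point_lidar = np.array([2.0, 0.0, 0.0, 1.0])  # homogeneous point in the lidar frame
point_camera = T_camera_lidar @ point_lidar   # same point in the camera frame
```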
It is also possible to drive the vehicle "manually" along a learning route. This step then requires human intervention, but this is limited to driving the vehicle, the actual calibration being carried out automatically during the journey.

FIG. 2 diagrams the steps of this calibration method according to an embodiment of the invention. Step 201 corresponds to driving the vehicle along a route within the environment to be mapped. In order to be able to determine the extrinsic calibration parameters, it is in fact necessary for the vehicle to cover a certain distance along the route, corresponding to constraints which depend on the different implementations of the invention. In particular, the path which the vehicle must take may advantageously comprise at least one loop and one half-turn. This allows the accuracy of the calibration to be increased substantially, by observing the same landmarks repeatedly (in slightly different poses). In the context of an implementation using video cameras, this constraint is all the more important since the cameras do not have overlapping fields. Otherwise, it is possible to relax this constraint.

During this journey, the perception system performs two tasks. A first task, referenced 203 in FIG. 2, consists in acquiring a stream of frames supplied by rangefinders, in particular of the LIDAR type. A second task consists in determining the pose of the vehicle as a function of time. The acquisitions of the sensors are generally sampled, so that the pose of the vehicle is determined at discrete moments.

Several implementations are possible for determining the pose of the vehicle as a function of time. According to a first implementation, the pose is determined from non-environmental sensors, that is to say sensors measuring information originating from the operation of the vehicle itself: sensors on the wheels, current consumption of the motors, etc. Preferably, an inertial unit can be used to measure the displacements of the vehicle, in translation and in rotation. From the measurement of these displacements, the pose can be directly supplied (a sketch of this first, inertial implementation is given below).

According to a second implementation, the extrinsic and intrinsic parameters of at least one video camera C1, C2 belonging to the perception system are determined, then the pose is determined as a function of these parameters. This second implementation corresponds to steps 202, 204 and 205 of FIG. 2. At reference 202, the perception system acquires video streams, made up of sequences of digital images, coming from the video sensors. It is also assumed, in this embodiment, that the video cameras C1, C2 are synchronized with each other, that is to say that they take images at the same times, the shots being controlled by clocks synchronized with each other or by a common clock. These shooting moments each correspond to a pose of the vehicle (or of the perception system, which amounts to the same thing insofar as they are linked by a rigid connection). The lidars need not be synchronized.

Generally, according to the state of the art, the rangefinders, in particular lidars, and the video cameras are calibrated separately. According to this implementation of the invention, on the contrary, it is intended to link the two problems and, in particular, to take advantage of the calibration of the cameras for the calibration of the lidars. The calibration of the extrinsic and intrinsic parameters of the cameras C1, C2 can be obtained in different ways.
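As announced above, a minimal sketch of the first implementation follows (assuming planar motion and incremental displacements already extracted from the inertial or odometric measurements; it is a simplification, not the patent's implementation): the incremental displacements are chained into a pose as a function of time.

```python
# Illustrative sketch of the first implementation (assumption: planar motion,
# incremental displacements already extracted from the inertial/odometric
# measurements); not the patent's implementation.
import numpy as np

def integrate_poses(timestamps, d_translations, d_headings):
    """Chain body-frame increments (dx, dy) and heading increments dtheta into
    absolute poses (t, x, y, theta), one per sample time."""
    poses = [(timestamps[0], 0.0, 0.0, 0.0)]
    x = y = theta = 0.0
    for t, (dx, dy), dth in zip(timestamps[1:], d_translations, d_headings):
        # Rotate the body-frame increment into the world frame, then accumulate.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dth
        poses.append((t, x, y, theta))
    return poses
```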
According to one embodiment, a step 204 of constructing a map of the environment is implemented, that is to say a mechanism of the SLAM type ("Simultaneous Localization and Mapping"). This mapping consists in determining 3D landmarks, characterized by 3D landmark parameters, from the digital images of the video streams coming from the cameras while the vehicle is moving along the route. Each image corresponds, as said previously, to a pose of the vehicle, characterized by pose parameters. These 3D landmarks model the landmarks of the real world, that is to say that they form an approximation which is aimed to be as accurate as possible. In accordance with a SLAM-type algorithm, the map obtained includes 3D landmark parameters and pose parameters. A pose is defined by three translation parameters and three rotation parameters in the global or local Euclidean coordinate system (relative to a neighboring pose).

The construction of the map comprises steps of reconstructing maps from image sequences and of optimizing these maps by applying bundle adjustment algorithms. The method for calibrating the extrinsic parameters of the cameras comprises the optimization of at least one map corresponding to a first sequence of images, considering at least one extrinsic parameter and/or at least one pose parameter and/or one 3D landmark parameter as constant, and the optimization of at least one map corresponding to a second sequence of images longer than the first sequence of images and including the first sequence of images, considering said at least one extrinsic parameter and/or said at least one pose parameter and/or said at least one 3D landmark parameter as variable so as to estimate it.

In one embodiment, the images are grouped into elementary sequences, each elementary sequence comprising a number X of neighboring images with an overlap of Y images between two successive elementary sequences (two successive elementary sequences have Y images in common). In one embodiment, each elementary sequence comprises X = 3 images with an overlap of Y = 2 images (see the sketch below). By "neighboring images" is meant images corresponding to close poses of the perception system, so that these images overlap and can contain 2D landmarks capable of being matched. We recall that a "2D landmark" is a characteristic shape of the environment as perceived by the video cameras, and is therefore a perception of a 3D landmark that we are trying to reconstruct to establish the map.

The calibration process then consists in constructing a (partial or elementary) map from each elementary sequence. The map of the first elementary sequence can be computed by epipolar geometry and triangulation of the landmarks, then those of the following elementary sequences are computed step by step starting from the first map. The epipolar geometry computation makes it possible to determine the poses of the vehicle corresponding to the images of the first elementary sequence. The triangulation makes it possible to determine the parameters of the 3D landmarks (i.e. the three-dimensional coordinates of the 3D landmarks) corresponding to the 2D landmarks matched between the images of the first elementary sequence.
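The grouping into elementary sequences can be illustrated by the following minimal sketch (a simplification, not the patent's code), which windows an image list with X = 3 and Y = 2 as in the embodiment above.

```python
# Illustrative sketch (not the patent's code): grouping an image list into
# elementary sequences of X neighbouring images overlapping by Y images.
def elementary_sequences(images, x=3, y=2):
    step = x - y  # with X = 3, Y = 2 the window advances one image at a time
    return [images[i:i + x] for i in range(0, len(images) - x + 1, step)]

# elementary_sequences([0, 1, 2, 3, 4]) -> [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```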
The computation of the epipolar geometry is carried out in a known manner, for example by identifying characteristic 2D landmarks in the images of the sequence (for example by the Harris corner method), matching the characteristic 2D landmarks between the images of the sequence, and computing the poses of the perception system, for example by implementing a RANSAC-type algorithm on two poses and extrapolating the third pose. The triangulation of the matched 2D landmarks can be carried out in a known manner, for example by a midpoint method, a sketch of which is given below. Triangulation makes it possible to obtain the 3D landmarks, characterized by three-dimensional coordinates in the global Euclidean coordinate system.

For the following elementary sequences, the computation of the map includes the detection of the 2D landmarks in the image(s) of the sequence additional to the previous one, then the determination of the corresponding pose of the perception system from those of these 2D landmarks that match the ones already computed in the map of the previous elementary sequence. Finally, the additional 2D landmarks are triangulated. All the elementary sequences are thus reconstructed step by step.

Each map of each elementary sequence can then be optimized by implementing a bundle adjustment algorithm. A bundle adjustment algorithm is an iterative algorithm making it possible to optimize the various parameters entering into the computation of the map, by convergence of a criterion which is generally the minimization of a cost function. The parameters used in the computation of a map from images taken by a set of cameras include the 3D landmark parameters, the pose parameters of the perception system, and the extrinsic and intrinsic parameters of the vision system. A parameter considered as variable during the optimization will be estimated, and a parameter considered as constant or fixed will not be optimized or estimated. The calibration process thus includes optimizing the map of each elementary sequence by bundle adjustment while considering the extrinsic parameters as constant. Then, the elementary sequence maps are aggregated to obtain the map of the complete sequence.

This process is more fully described, with several possible implementations, in patent application WO2013/053701, entitled "Method of calibrating a computer-based vision system on board a craft". It is also described in the article "Fast calibration of embedded non-overlapping cameras", by Pierre Lébraly, Eric Royer, Omar Ait-Aider, Clément Deymier and Michel Dhome, in IEEE International Conference on Robotics and Automation (ICRA), 2011. This process allows the intrinsic parameters and the extrinsic parameters to be calibrated simultaneously. However, other methods can be used to calibrate the cameras.

Once the calibration parameters have been determined, at the end of step 204, the poses of the vehicle as a function of time can easily be determined, in a step 205. In the embodiment described, in fact, the poses are determined along with the calibration parameters for each image. The images being dated, the pose as a function of time is deduced.
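As an illustration of the midpoint method mentioned above, the following sketch (hypothetical code, not from the patent) triangulates a matched 2D landmark as the midpoint of the common perpendicular between the two viewing rays; the camera centres and ray directions are assumed to be already expressed in a common frame and non-parallel.

```python
# Illustrative sketch of a midpoint triangulation (hypothetical code, not from
# the patent): the 3D landmark is the midpoint of the common perpendicular
# between the two viewing rays, assumed non-parallel and expressed in a
# common frame.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: unit ray directions (3-vectors)."""
    # Solve for s, t minimising ||(c1 + s*d1) - (c2 + t*d2)||^2.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return (c1 + s * d1 + c2 + t * d2) / 2.0
```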
The method of the invention furthermore comprises the detection of landmarks, 206, within the streams of frames coming from the rangefinders (LIDAR, etc.), then the determination of the extrinsic parameters, 207, minimizing a cost function taking into account the position parameters of these landmarks within a frame and a pose of the vehicle interpolated for a time corresponding to that of said frame.

The invention makes it possible not to impose a synchronization between the acquisition of the LIDAR frames and the acquisition of the video images, which would in practice be a very strong constraint, difficult to implement. For this, an interpolation is implemented to match the acquisition dates of the lidar frames to the dates of the poses (determined by the acquisitions of the video images). According to one embodiment of the invention, the pose of the vehicle is first computed at the instant corresponding to that of the acquired frame; by interpolation, the frames are thus replaced in the "world" as perceived by the cameras (a sketch of this interpolation is given further below). Another embodiment consists, on the contrary, in interpolating the lidar frames in order to make them correspond to the dates of the poses.

In each frame, in a step referenced 206 in FIG. 2, it is sought to detect landmarks, that is to say characteristic shapes. According to one embodiment, these landmarks are rectilinear shapes. In particular, they can be line segments. In the example of FIG. 4, which will be explained later, the lidars scan a horizontal plane and the landmarks detected are therefore horizontal lines, but other implementations are possible. These horizontal lines may correspond to the intersection of the plane scanned by the LIDAR with vertical obstacles in the real world, in particular walls. For a good calibration, it is necessary that the vehicle journey take place in an environment with a sufficient number of walls. It is obvious that, depending on the contexts in which the method according to the invention is applied, other characteristic shapes may be used.

In each frame, the lines can be extracted using a split-and-merge algorithm, as described for example in the article "A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics" by Viet Nguyen, Agostino Martinelli, Nicola Tomatis and Roland Siegwart, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006. Since, according to this embodiment, the lines extracted from the lidar frames are the intersection of the lidar plane with vertical planes, it is not possible to determine the vertical position of the lidars, that is to say their heights.

A line is detected within a frame if it meets a predetermined criterion. According to one embodiment, this criterion is made up of two conditions: the line contains at least a certain number of points, for example 50 points; and no point is more than a given distance (for example 6 cm) from the best line passing through the point cloud. All the lines thus extracted from the LIDAR frames are collected in a set of lines ℒ. Each line is defined by its two end points and a normal vector pointing in the direction of observation. This last element makes it possible to avoid false associations between, for example, the front face and the rear face of the same wall.
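A minimal sketch of the split phase of such a split-and-merge extraction follows (illustrative only; the cited article describes the complete algorithm, including the merge phase, which recombines adjacent collinear segments). It applies the acceptance thresholds stated above: at least 50 points, none farther than 6 cm from the fitted line.

```python
# Illustrative sketch of the split phase of a split-and-merge line extraction
# over one 2D lidar frame, with the acceptance thresholds stated above (at
# least 50 points, maximum deviation 6 cm). The merge phase, which recombines
# adjacent collinear segments, is omitted; see the cited article.
import numpy as np

def split(points, max_dev=0.06, min_pts=50):
    """points: ordered (N, 2) array of scan points. Returns accepted pieces."""
    if len(points) < min_pts:
        return []
    p0, p1 = points[0], points[-1]
    d = (p1 - p0) / np.linalg.norm(p1 - p0)      # chord direction
    diff = points - p0
    # Perpendicular distance of every point to the chord (2D cross product).
    dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
    worst = int(np.argmax(dist))                 # always an interior point
    if dist[worst] <= max_dev:
        return [points]                          # piece accepted as a line
    return (split(points[:worst + 1], max_dev, min_pts)
            + split(points[worst:], max_dev, min_pts))
```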
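The pose interpolation mentioned earlier (computing the pose of the vehicle at the instant of each acquired lidar frame) can be sketched as follows, assuming linear interpolation for the translation and spherical linear interpolation (slerp) for the rotation; this is one plausible realization, not the patent's.

```python
# Illustrative sketch of the pose interpolation (one plausible realization,
# not the patent's): camera-based poses are resampled at the acquisition date
# of each lidar frame, linearly for the translation and by slerp for the
# rotation.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_times, translations, rotations, frame_times):
    """pose_times: sorted dates of the camera poses; translations: (N, 3)
    array; rotations: a scipy Rotation holding N rotations; frame_times:
    lidar frame dates, assumed within [pose_times[0], pose_times[-1]]."""
    slerp = Slerp(pose_times, rotations)
    t = np.column_stack([np.interp(frame_times, pose_times, translations[:, k])
                         for k in range(3)])
    return t, slerp(frame_times)
```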
In a next step, the set of lines ℒ is subdivided into groups, or clusters, according to the principle that if two detected lines are close enough to each other, in distance and in orientation, then it is very likely that they are two observations of the same line of the real world. It is therefore a question of associating the detected landmarks which correspond to the same real-world landmark. The aim of this step is to obtain a single group Π_i for each vertical plane i observed during the course of the vehicle. In the following, each of the groups Π_i will be called a "plane", and the planes will be associated with a respective Cartesian equation for the subsequent step of minimizing a cost function.

When the clustering is finished, each plane equation can be initialized with the value of the vertical plane equation which best corresponds to the line equations associated with it. More precisely and for example, the vertical plane can be defined by a point (belonging to the plane) and two vectors forming a basis of the plane. One of the two vectors is a vertically oriented vector of the real world; the other vector is the average of the direction vectors of all the line segments associated with the plane. The point is the barycenter of the set of midpoints of all the segments associated with the plane.

As an illustration, an example of implementation of the steps for creating and merging the groups Π_i can be given in the form of a pseudo-language algorithm:

Creation of groups:
    i ← 1
    REPEAT
        l ← a random element of the set ℒ
        REMOVE l from the set ℒ
        Π_i ← {l}
        FOR EVERY l' ∈ ℒ
            IF l' and l are close enough THEN
                Π_i ← Π_i ∪ {l'}
                REMOVE l' from the set ℒ
            END IF
        END FOR
        i ← i + 1
    UNTIL ℒ is empty

Merging of groups:
    REPEAT
        FOR ALL i, j WITH i < j DO
            IF ∃ (m, n) ∈ Π_i × Π_j such that m and n are sufficiently close THEN
                Merge Π_i and Π_j
            END IF
        END FOR
    UNTIL no more merging is possible

The conditions for two lines l1 and l2 to be "close enough" can be as follows:
- the angle between lines l1 and l2 is less than a given threshold, for example 15°;
- the Euclidean distance between the line segments is less than a given threshold, for example 1 m;
- the dot product between the normal vectors associated with lines l1 and l2 is positive. As indicated above, this makes it possible to avoid combining the front and rear faces of the same wall.

In addition, in the case where the poses are determined by a visual SLAM algorithm, an additional condition can be that the lines have been observed from two poses which are connected in the graph of poses resulting from the SLAM algorithm (step 204). In other words, the two poses share the same detected visual characteristic. A sketch of this proximity test is given below.

Then, in a step referenced 207 in FIG. 2, the extrinsic parameters of the lidars are determined, which minimize a cost function measuring the differences between the detected landmarks associated with the same real-world landmark. More precisely, it can measure the difference between the detected lines belonging to the same group (or cluster) from step 206. This cost function takes into account the position parameters of these landmarks within a frame and the corresponding pose of the vehicle, interpolated for the same date. According to one embodiment of the invention, this minimization can be done iteratively using, for example, the Levenberg-Marquardt algorithm. The Levenberg-Marquardt algorithm, or LM algorithm, provides a numerical solution to the problem of minimizing a function, often non-linear and depending on several variables.
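The proximity test announced above can be sketched as follows (an illustrative approximation, not the patent's code), with the example thresholds given above: angle below 15°, Euclidean distance between segments below 1 m, positive dot product between the normals. The segment-to-segment distance is approximated here by sampling; an exact distance computation could be substituted.

```python
# Illustrative sketch of the "close enough" test (an approximation, not the
# patent's code), with the example thresholds above: angle < 15 degrees,
# segment distance < 1 m, positive dot product between the viewing-side
# normals. The segment-to-segment distance is approximated by sampling.
import numpy as np

def segment_distance(a0, a1, b0, b1, samples=20):
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pa = a0 + t * (a1 - a0)
    pb = b0 + t * (b1 - b0)
    return np.min(np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2))

def close_enough(seg1, seg2, max_angle_deg=15.0, max_dist=1.0):
    """seg = (p0, p1, normal): endpoints and observation-side normal vector."""
    d1, d2 = seg1[1] - seg1[0], seg2[1] - seg2[0]
    cos_a = abs(d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) > max_angle_deg:
        return False
    if segment_distance(seg1[0], seg1[1], seg2[0], seg2[1]) > max_dist:
        return False
    return seg1[2] @ seg2[2] > 0.0  # reject front/back faces of the same wall
```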
The algorithm interpolates between the Gauss-Newton algorithm and the gradient-descent algorithm. More stable than Gauss-Newton, it finds a solution even if it is started very far from a minimum. This algorithm is conventional and well known to those skilled in the art. The online encyclopedia Wikipedia provides explanations, but it is also possible to refer to libraries of algorithms, or to the fundamental articles:
- K. Levenberg, "A Method for the Solution of Certain Problems in Least Squares", in Quart. Appl. Math. 2, 1944, p. 164-168;
- D. Marquardt, "An Algorithm for Least-Squares Estimation of Nonlinear Parameters", in SIAM J. Appl. Math. 11, 1963, p. 431-441.
Other algorithms close to or based on the Levenberg-Marquardt algorithm are obviously also usable and accessible to those skilled in the art.

One can then seek to determine, on the one hand, the pose of each of the lidars of the perception system, i.e. 5 extrinsic parameters per lidar, and, on the other hand, the equations of the vertical planes. One of the additional advantages of the method is that, since the equations of the planes are considered as unknowns of the problem, it is not necessary to have a priori conditions on the environment in which the vehicle operates.

It is considered that a plane Π_j is associated with n_j line segments l_{i,j}, with 1 ≤ i ≤ n_j. Each segment is defined by its two endpoints M_{i,j} and N_{i,j}, and the plane Π_j is defined by its Cartesian equation. A possible cost function F can be:

F(lidars, planes) = Σ_j Σ_{1≤i≤n_j} [ d(M_{i,j}, Π_j)² + d(N_{i,j}, Π_j)² ]

in which d(P, Π) is the Euclidean distance between the point P and the plane Π. This cost function is computed from two families of parameters, which respectively define the extrinsic parameters of the lidars and the equations of the planes. Minimizing this function makes it possible to obtain the parameters of the lidars for which the differences between each line segment [M_{i,j}, N_{i,j}] and the plane Π_j modeling the associated real-world landmark are minimal. These planes are not known; they are also part of the parameters to be determined during the minimization process.

Different implementations are possible to minimize this function, and also to define the vertical planes. However, as mentioned above, a direct application of the Levenberg-Marquardt algorithm makes it possible to solve such an optimization of a cost function defined, as here, as a sum of squared distances.

A simple way of defining the vertical planes is to consider that the verticals are perfectly vertical. Consequently, it is possible to define a vertical plane by only two parameters, which can be the distance d0 between the origin O of a reference frame and the point A of the plane closest to this origin, as well as an angle θ between the abscissa axis and the oriented segment linking the origin O to this point A. Figure 3 schematizes this possible definition of a vertical plane and presents a projection of the plane Π onto the horizontal plane formed by the abscissa axis X and the ordinate axis Y. The equation of a vertical plane Π can then be written:

x·cos(θ) + y·sin(θ) − d0 = 0

This assumption of exact verticality can impact the accuracy of the calibration. Even if the ground seems flat and horizontal, the position of the cameras can cause a slight tilt. Another implementation consists in adding two other parameters, α and β, which define the vertical axis V by v = (α, β, 1)ᵀ. These two parameters can be initialized to zero and be adjusted during the iterative minimization process.
The equation of a vertical plane Π can then be written:

x·cos(θ) + y·sin(θ) + z·(−α·cos(θ) − β·sin(θ)) − d0 = 0

In this formulation, the parameters α and β are common to all the planes, while the parameters θ and d0 are specific to each plane. Thus the cost function involves 5×N + 2×M + 2 parameters (M being the number of planes and N the number of lidars).

Figure 4 illustrates the result of this minimization step graphically. In this scene, corresponding to an extract of a map, the light lines represent the landmarks detected by the perception system, and the gray lines represent the "modeled" walls, that is to say the projection of the vertical planes Π. On the left side, it can be seen that each wall is detected by a group of more or less thick lines. The thickness of these groups is the result of the poor calibration of the lidars. Within each group, the minimization determines the calibration parameters that minimize the thickness of these groups, as appears on the right side, corresponding to the same scene.

The method according to the invention applies particularly well to a perception system allowing a vehicle to detect obstacles, in particular walls. It is particularly effective when the lidars scan a horizontal plane of the surrounding space. Experimental tests demonstrate that the method according to the invention allows the pose of the lidars to be recovered with an accuracy of the order of 2 cm for the position and 2° for the orientation.

Of course, the present invention is not limited to the examples and to the embodiment described and shown, but is susceptible of numerous variants accessible to those skilled in the art.
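To conclude the description, the minimization step 207 can be sketched with a Levenberg-Marquardt least-squares routine (here SciPy's least_squares with method="lm"). For brevity the sketch below optimizes only the plane parameters (θ_j, d0_j), the segment endpoints being assumed already expressed in the world frame through the current lidar extrinsics; in the full problem, the 5 extrinsic parameters of each lidar are variables of the same minimization.

```python
# Illustrative sketch of the minimization step 207 (not the patent's code):
# the residuals are the point-to-plane distances d(M_ij, Pi_j) and
# d(N_ij, Pi_j), each vertical plane being parameterised by (theta_j, d0_j).
# Only the plane parameters are optimized here; in the full problem the 5
# extrinsic parameters of each lidar are variables of the same least-squares
# problem, and the endpoints below would be recomputed from them at each
# iteration.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, segments_per_plane):
    """params = [theta_1, d0_1, theta_2, d0_2, ...]; segments_per_plane[j] is
    an (n, 3) array of segment endpoints (the M_ij and N_ij) in the world
    frame."""
    res = []
    for j, endpoints in enumerate(segments_per_plane):
        theta, d0 = params[2 * j], params[2 * j + 1]
        normal = np.array([np.cos(theta), np.sin(theta)])
        for p in endpoints:
            res.append(p[:2] @ normal - d0)  # signed distance to vertical plane
    return np.array(res)

# x0 stacks the initial (theta_j, d0_j) of each plane from the clustering step;
# method="lm" selects a Levenberg-Marquardt implementation.
# result = least_squares(residuals, x0, args=(segments_per_plane,), method="lm")
```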
CLAIMS
1. Method for determining the extrinsic parameters of a perception system on board a vehicle (100) traveling a route and comprising a set of rangefinders (L1, L2) supplying streams of frames to a processing device (110), said method comprising determining the pose of said vehicle as a function of time, and being characterized in that it comprises the detection of landmarks within said streams of frames corresponding to landmarks in the real world, then the determination of the extrinsic parameters minimizing a cost function measuring the differences between the detected landmarks associated with the same real-world landmark.

2. Method according to the preceding claim, wherein said pose is determined by an inertial unit.

3. Method according to claim 1, wherein the calibration parameters of at least one video camera (C1, C2) belonging to said perception system are determined, then said pose is determined as a function of said parameters.

4. Method according to the preceding claim, wherein the determination of said calibration parameters of said at least one video camera comprises the reconstruction of a map of the environment of said vehicle comprising 3D landmarks modeling the real-world landmarks; the optimization of at least one map corresponding to a first sequence of images from said at least one video camera, considering at least one extrinsic parameter and/or at least one intrinsic parameter and/or at least one pose parameter and/or one 3D landmark parameter as constant; and the optimization of at least one map corresponding to a second sequence of images longer than the first sequence of images and including the first sequence of images, considering said at least one extrinsic parameter and/or said at least one intrinsic parameter and/or said at least one pose parameter and/or a 3D landmark parameter as variable so as to estimate it.

5. Method according to one of the preceding claims, wherein the detected landmarks are grouped according to a proximity criterion.

6. Method according to one of the preceding claims, wherein said landmarks are straight-line segments.

7. Method according to claims 5 and 6, wherein said cost function measures the difference between the detected lines belonging to a group.

8. Method according to one of the preceding claims, wherein the minimization of said cost function is done iteratively using a Levenberg-Marquardt type algorithm.

9. Computer program comprising software instructions implementing a method according to one of the preceding claims, when deployed on an information processing device.

10. Processing device (110) for determining the extrinsic parameters of a perception system on board a vehicle (100) traveling a route and comprising a set of rangefinders (L1, L2) supplying streams of frames to said processing device (110), comprising the software and/or hardware means for implementing the method according to one of claims 1 to 8.